synthetic face
SynMorph: Generating Synthetic Face Morphing Dataset with Mated Samples
Zhang, Haoyu, Ramachandra, Raghavendra, Raja, Kiran, Busch, Christoph
Abstract--Face morphing attack detection (MAD) algorithms have become essential to overcome the vulnerability of face recognition systems. To address the lack of large-scale, publicly available datasets due to privacy concerns and restrictions, in this work we propose a new method to generate a synthetic face morphing dataset with 2450 identities and more than 100k morphs. The proposed synthetic face morphing dataset is unique for its high-quality samples, different types of morphing algorithms, and its generalization to both single and differential morphing attack detection algorithms. For experiments, we apply face image quality assessment and vulnerability analysis to evaluate the proposed synthetic face morphing dataset from the perspective of biometric sample quality and morphing attack potential on face recognition systems. The results are benchmarked against an existing SOTA synthetic dataset and a representative non-synthetic dataset, and indicate improvement over the SOTA. Additionally, we design different protocols and study the applicability of the proposed synthetic dataset for training morphing attack detection algorithms. With the improvement of image manipulation techniques, it has also been shown that FRS are vulnerable to various types of attacks [2] [3]. Hence, it is essential to develop corresponding attack detection algorithms to protect the FRS from potential attacks. Developing generalized and robust MAD algorithms and testing the generalization of FRS requires datasets to evaluate and benchmark existing algorithms from different developers. However, due to privacy regulations, face samples are considered sensitive data, which makes it challenging to collect datasets on a large scale.
- North America > United States (0.14)
- Europe > Norway (0.04)
- Europe > Italy > Emilia-Romagna > Metropolitan City of Bologna > Bologna (0.04)
- Europe > Germany > Hesse > Darmstadt Region > Darmstadt (0.04)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
ChatGPT and biometrics: an assessment of face recognition, gender detection, and age estimation capabilities
Hassanpour, Ahmad, Kowsari, Yasamin, Shahreza, Hatef Otroshi, Yang, Bian, Marcel, Sebastien
This paper explores the application of large language models (LLMs), like ChatGPT, to biometric tasks. We specifically examine the capabilities of ChatGPT in performing biometric-related tasks, with an emphasis on face recognition, gender detection, and age estimation. Since biometrics are considered sensitive information, ChatGPT avoids answering direct prompts, and thus we crafted a prompting strategy to bypass its safeguards and evaluate its capabilities for biometric tasks. Our study reveals that ChatGPT recognizes facial identities and differentiates between two facial images with considerable accuracy. Additionally, experimental results demonstrate remarkable performance in gender detection and reasonable accuracy for the age estimation task. Our findings shed light on the promising potential of LLMs and foundation models for biometric applications.
- Europe > Switzerland > Vaud > Lausanne (0.04)
- North America > United States > Massachusetts > Hampshire County > Amherst (0.04)
- Europe > Norway (0.04)
- Asia > Middle East > Iran (0.04)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
BLADERUNNER: Rapid Countermeasure for Synthetic (AI-Generated) StyleGAN Faces
StyleGAN is the open-sourced TensorFlow implementation made by NVIDIA. It has revolutionized high-quality facial image generation. However, this democratization of Artificial Intelligence / Machine Learning (AI/ML) algorithms has enabled hostile threat actors to establish cyber personas or sock-puppet accounts on social media platforms backed by these ultra-realistic synthetic faces. This report surveys the relevance of AI/ML with respect to Cyber & Information Operations. The proliferation of AI/ML algorithms has led to a rise in DeepFakes and inauthentic social media accounts. Threats are analyzed within the Strategic and Operational Environments. Existing methods of identifying synthetic faces exist, but they rely on human beings to visually scrutinize each photo for inconsistencies. However, through use of the DLIB 68-landmark pre-trained file, it is possible to analyze and detect synthetic faces by exploiting repetitive behaviors in StyleGAN images. Project Blade Runner encompasses two scripts necessary to counter StyleGAN images. Through PapersPlease acting as the analyzer, it is possible to derive indicators-of-attack (IOA) from scraped image samples. These IOAs can be fed back into AmongUs acting as the detector to identify synthetic faces in live operational samples. The open-source copy of Blade Runner may lack additional unit tests and some functionality, but it is a redacted version that is far leaner, better optimized, and serves as a proof-of-concept for the information security community. The desired end-state is to incrementally add automation to stay on par with its closed-source predecessor.
- Europe > Estonia > Harju County > Tallinn (0.05)
- Europe > Ukraine (0.05)
- North America > United States > Indiana (0.04)
- (5 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- (2 more...)
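The report above does not publish its exact heuristic, but the core idea it describes can be sketched: StyleGAN's aligned output places key facial landmarks (such as the eye points in the dlib 68-point scheme) at nearly identical pixel coordinates in every image, so unusually low positional variance across a batch of scraped profile photos can serve as an indicator-of-attack. The function names and the jitter threshold below are my own illustrative assumptions, not code from Project Blade Runner.

```python
from statistics import pstdev

def eye_centres(landmarks):
    """Average the dlib 68-point eye landmarks (36-41 left, 42-47 right)."""
    centre = lambda pts: (sum(p[0] for p in pts) / len(pts),
                          sum(p[1] for p in pts) / len(pts))
    return centre(landmarks[36:42]), centre(landmarks[42:48])

def looks_synthetic(batch, max_jitter=2.0):
    """Flag a batch whose eye centres barely move between images.

    batch: list of 68-point (x, y) landmark lists, one per image.
    max_jitter: assumed pixel-stddev threshold (illustrative, not tuned).
    """
    lefts, rights = zip(*(eye_centres(lm) for lm in batch))
    # Spread of x and y coordinates of each eye centre across the batch.
    spread = max(pstdev(axis) for coords in (lefts, rights)
                 for axis in zip(*coords))
    return spread < max_jitter
```

In practice the landmark lists would come from a detector such as dlib's `shape_predictor` with the 68-point model; here they are plain coordinate lists so the sketch stays self-contained.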
'Degraded' Synthetic Faces Could Help Improve Facial Image Recognition
Researchers from Michigan State University have devised a way for synthetic faces to take a break from the deepfakes scene and do some good in the world – by helping image recognition systems to become more accurate. The new controllable face synthesis module (CFSM) they've devised is capable of regenerating faces in the style of real-world video surveillance footage, rather than relying on the uniformly higher-quality images used in popular open source datasets of celebrities, which do not reflect all the faults and shortcomings of genuine CCTV systems, such as facial blur, low resolution, and sensor noise – factors that can affect recognition accuracy. CFSM is not intended specifically to authentically simulate head poses, expressions, or all the other usual traits that are the objective of deepfake systems, but rather to generate a range of alternative views in the style of the target recognition system, using style transfer. The system is designed to mimic the style domain of the target system, and to adapt its output according to the resolution and range of 'eccentricities' therein. The use-case includes legacy systems that are not likely to be upgraded due to cost, but which can currently contribute little to the new generation of facial recognition technologies, due to poor quality of output that may once have been leading-edge.
- North America > United States > Michigan (0.25)
- Asia > China > Hong Kong (0.05)
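CFSM itself is a learned style-transfer module, but the surveillance-style artefacts the article says it mimics (blur, low resolution, sensor noise) can be approximated with simple hand-rolled operations on a grayscale image, here a list of lists of 0-255 pixel values. All function names and parameter values below are illustrative assumptions, not from the MSU paper.

```python
import random

def box_blur(img):
    """3x3 mean filter; edges clamp to the nearest valid pixel."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
            out[y][x] = acc // 9
    return out

def downsample(img, factor=2):
    """Nearest-neighbour decimation, mimicking low-resolution CCTV frames."""
    return [row[::factor] for row in img[::factor]]

def add_noise(img, amplitude=10, seed=0):
    """Uniform sensor noise, clipped to the valid 0-255 pixel range."""
    rng = random.Random(seed)
    return [[min(255, max(0, p + rng.randint(-amplitude, amplitude)))
             for p in row] for row in img]

def degrade(img):
    """Apply the three surveillance-style degradations in sequence."""
    return add_noise(downsample(box_blur(img)))
```

The learned module adapts these degradations to the target camera's actual style domain; this fixed pipeline only shows the kinds of transformation involved.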
Synthesis AI raises $17M to generate synthetic data – TechCrunch
Synthesis AI, a startup developing a platform that generates synthetic data to train AI systems, today announced that it raised $17 million in a Series A funding round led by 468 Capital with participation from Sorenson Ventures and Strawberry Creek Ventures, Bee Partners, PJC, iRobot Ventures, Boom Capital and Kubera Venture Capital. CEO and cofounder Yashar Behzadi says that the proceeds will be put toward product R&D, growing the company's team, and expanding research -- particularly in the area of mixed real and synthetic data. Synthetic data, or data that's created artificially rather than captured from the real world, is coming into wider use in data science as the demand for AI systems grows. The benefits are obvious: While collecting real-world data to develop an AI system is costly and labor-intensive, a theoretically infinite amount of synthetic data can be generated to fit any criteria. For example, a developer could use synthetic images of cars and other vehicles to develop a system that can differentiate between makes and models.
- North America > Canada > Alberta (0.15)
- North America > United States > Arizona (0.05)
- Information Technology > Artificial Intelligence > Vision (0.53)
- Information Technology > Artificial Intelligence > Robots (0.48)
The Best Machine Learning Company of 2021
We had a lot of developments, with many twists and turns. The sheer number and quality of the papers and results released in the ML space were amazing. We had innovations in GPUs, newer models, lots of research across different fields, and some ground-breaking discoveries. The Machine Learning industry continued to grow by leaps and bounds. Here are some interesting stats.
Humans struggle to distinguish between real and AI-generated faces
According to a new paper, AI-generated faces have become so advanced that, more often than not, humans cannot distinguish between real and fake. "Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable--and more trustworthy--than real faces," the researchers explained. Video, audio, text, and imagery generated by generative adversarial networks (GANs) are increasingly being used for nonconsensual intimate imagery, financial fraud, and disinformation campaigns. The generator starts with random pixels and keeps improving the image to avoid penalisation from the discriminator. This process continues until the discriminator can no longer distinguish a synthesised face from a real one.
- North America > United States > California (0.06)
- North America > Canada > Ontario > Middlesex County > London (0.06)
- Europe > Ukraine (0.06)
- Europe > Netherlands > North Holland > Amsterdam (0.06)
- Media (0.78)
- Government (0.60)
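The adversarial loop described in the article above can be illustrated with a deliberately tiny stand-in: the "faces" are single numbers, the real distribution is centred on a fixed mean, and the generator nudges its output towards it until the discriminator's best threshold classifies no better than chance. This is a toy sketch of the dynamic, not an actual GAN; every name and constant here is my own.

```python
import random

REAL_MEAN = 5.0  # centre of the "real" 1-D distribution

def discriminator_accuracy(real, fake):
    """Accuracy of a 1-D threshold classifier placed between the two means."""
    threshold = (sum(real) / len(real) + sum(fake) / len(fake)) / 2
    correct = sum(r > threshold for r in real) + sum(f <= threshold for f in fake)
    return correct / (len(real) + len(fake))

def train_generator(steps=200, lr=0.1, seed=0):
    """Move the generator's output mean until the discriminator is near chance."""
    rng = random.Random(seed)
    g = 0.0  # generator starts far from the real distribution
    for _ in range(steps):
        real = [rng.gauss(REAL_MEAN, 0.5) for _ in range(64)]
        fake = [rng.gauss(g, 0.5) for _ in range(64)]
        if discriminator_accuracy(real, fake) <= 0.55:
            break  # discriminator can no longer separate real from fake
        g += lr * (sum(real) / len(real) - g)  # step towards the real samples
    return g
```

A real GAN replaces the scalar mean with the weights of a neural generator and the threshold with a learned discriminator network, but the stopping condition is the same: training equilibrates when the discriminator's accuracy approaches chance.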
A.I. face study reveals a shocking new tipping point for humans
Computers have become very, very good at generating photorealistic images of human faces. What could possibly go wrong? A study published last week in the academic journal Proceedings of the National Academy of Sciences confirms just how convincing "faces" produced by artificial intelligence can be. In that study, more than 300 research participants were asked to determine whether a supplied image was a photo of a real person or a fake generated by an A.I. The human participants got it right less than half the time.
- North America > United States (0.36)
- Europe > Ukraine (0.05)
- Media (0.74)
- Government > Regional Government > North America Government > United States Government (0.36)
Fake faces created by AI look MORE trustworthy than real people, study reveals
Fake faces created by artificial intelligence (AI) look more trustworthy than faces of real people, a worrying new study reveals. Researchers conducted several experiments to see whether fake faces created by machine learning frameworks were able to fool humans. They found synthetically generated faces are not only highly photorealistic, but are also nearly indistinguishable from real faces - and are even judged to be more trustworthy. In light of the results, the researchers are calling for safeguards to prevent 'deepfakes' from circulating online. Deepfakes have already been used for so-called 'revenge porn', fraud and propaganda, leading to mistaken identity and the spread of fake news.
AI-synthesized faces are indistinguishable from real faces and more trustworthy
Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable--and more trustworthy--than real faces. Artificial intelligence (AI)–powered audio, image, and video synthesis--so-called deep fakes--has democratized access to previously exclusive Hollywood-grade, special effects technology. From synthesizing speech in anyone's voice (1) to synthesizing an image of a fictional person (2) and swapping one person's identity with another or altering what they are saying in a video (3), AI-synthesized content holds the power to entertain but also deceive. Generative adversarial networks (GANs) are popular mechanisms for synthesizing content.
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.71)